17 research outputs found

    Item Recommendation with Evolving User Preferences and Experience

    Current recommender systems exploit user and item similarities by collaborative filtering. Some advanced methods also consider the temporal evolution of item ratings as a global background process. However, all prior methods disregard the individual evolution of a user's experience level and how this is expressed in the user's writing in a review community. In this paper, we model the joint evolution of user experience, interest in specific item facets, writing style, and rating behavior. This way we can generate individual recommendations that take into account the user's maturity level (e.g., recommending art movies rather than blockbusters for a cinematography expert). As only item ratings and review texts are observables, we capture the user's experience and interests in a latent model learned from her reviews, vocabulary, and writing style. We develop a generative HMM-LDA model to trace user evolution, where the Hidden Markov Model (HMM) traces her latent experience progressing over time, with only user reviews and ratings as observables. The facets of a user's interest are drawn from a Latent Dirichlet Allocation (LDA) model derived from her reviews, as a function of her (again latent) experience level. In experiments with five real-world datasets, we show that our model improves rating prediction over state-of-the-art baselines by a substantial margin. We also show, in a use-case study, that our model performs well in the assessment of user experience levels.
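
    The generative process sketched below is a minimal, illustrative reading of the HMM-LDA model the abstract describes: a latent experience level that evolves via HMM transitions, and review words drawn from experience-conditioned facet distributions. All sizes, the stay-or-advance transition structure, and the Dirichlet parameters are hypothetical stand-ins; the actual model learns these quantities by inference from reviews and ratings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the paper
E, K, V = 5, 10, 1000     # experience levels, interest facets, vocabulary size
T = 20                    # number of reviews one user writes over time
WORDS_PER_REVIEW = 50

# HMM over experience: a user stays at her level or advances by one step
trans = np.zeros((E, E))
for e in range(E):
    trans[e, e] = 0.8
    trans[e, min(e + 1, E - 1)] += 0.2

# Experience-dependent LDA parameters (random stand-ins; learned in the paper)
alpha = np.full(K, 0.1)                                # Dirichlet prior over facets
beta = rng.dirichlet(np.full(V, 0.01), size=(E, K))    # word dist. per (level, facet)

experience = 0
for t in range(T):
    experience = rng.choice(E, p=trans[experience])    # latent level progresses
    theta = rng.dirichlet(alpha)                       # facet mixture of this review
    words = []
    for _ in range(WORDS_PER_REVIEW):
        z = rng.choice(K, p=theta)                     # draw a facet
        words.append(rng.choice(V, p=beta[experience, z]))  # word depends on both
```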

    $1.00 per RT #BostonMarathon #PrayForBoston: analyzing fake content on Twitter

    This study found that 29% of the most viral content on Twitter during the Boston bombing crisis consisted of rumors and fake content.

    Abstract: Online social media has emerged as one of the prominent channels for the dissemination of information during real-world events. Malicious content posted online during such events can result in damage, chaos, and monetary losses in the real world. We analyzed one such medium, Twitter, for content generated during the Boston Marathon blasts of April 15, 2013. A large amount of fake content and many malicious profiles originated on the Twitter network during this event. The aim of this work is to perform an in-depth characterization of the factors that influenced malicious content and profiles becoming viral. Our results showed that 29% of the most viral content on Twitter during the Boston crisis consisted of rumors and fake content, 51% consisted of generic opinions and comments, and the rest was true information. We found that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content. Next, we used a regression prediction model to verify that the overall impact of all users who propagate a piece of fake content at a given time can be used to estimate the growth of that content in the future. Many malicious accounts were created on Twitter during the Boston event and were later suspended by Twitter. We identified over six thousand such user profiles and observed that the creation of such profiles surged considerably right after the blasts occurred. We identified closed community structure and star formation in the interaction network of these suspended profiles amongst themselves.
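
    As a rough illustration of the regression step, the sketch below fits a linear model mapping aggregate propagator-impact features at a given time to a rumor's future growth. The feature columns and all numbers are invented for illustration; the paper's actual features and model specification are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-rumor snapshots: aggregate "impact" of all users who have
# propagated the content so far. Columns: total follower count of propagators,
# number of verified propagators, number of propagating users.
X = np.array([
    [12_000,  1,  30],
    [95_000,  4, 210],
    [3_500,   0,  12],
    [250_000, 9, 840],
])
# Target: additional retweets the content gathered after the snapshot
y = np.array([150, 2_100, 40, 7_800])

model = LinearRegression().fit(X, y)
print(model.predict([[60_000, 2, 150]]))  # estimated future growth of a new rumor
```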

    Explainable Machine Learning for Public Policy: Use Cases, Gaps, and Research Directions

    Explainability is a crucial requirement for the effectiveness as well as the adoption of Machine Learning (ML) models supporting decisions in high-stakes public policy areas such as health, criminal justice, education, and employment. While the field of explainable ML has expanded in recent years, much of this work has not taken real-world needs into account. A majority of proposed methods use benchmark datasets with generic explainability goals, without clear use cases or intended end users. As a result, the applicability and effectiveness of this large body of theoretical and methodological work to real-world applications is unclear. This paper focuses on filling this void for the domain of public policy. First, we develop a taxonomy of explainability use cases within public policy problems; second, for each use case, we define the end users of explanations and the specific goals explainability has to fulfill; third, we map existing work to these use cases, identify gaps, and propose research directions to fill those gaps in order to have a practical societal impact through ML.
    Comment: Submitted for review at Communications of the ACM.

    Driving The Last Mile: Characterizing and Understanding Distracted Driving Posts on Social Networks

    In 2015, 391,000 people were injured due to distracted driving in the US. One of the major reasons behind distracted driving is the use of cell phones, which accounts for 14% of fatal crashes. Social media applications have enabled users to stay connected; however, the use of such applications while driving can have serious repercussions, often leading the user to be distracted from the road and to end up in an accident. In the context of impression management, it has been found that individuals often take risks (such as teens smoking cigarettes, indulging in narcotics, or participating in unsafe sex) to improve their social standing. Viewing the phenomenon of distracted driving posts through the lens of self-presentation, it can therefore be hypothesized that users indulge in risk-taking behavior on social media to improve their impression among their peers. In this paper, we first try to understand the severity of such social-media-based distractions by analyzing content posted on a popular social media site in which the user is driving while simultaneously creating the post. To this end, we build a deep learning classifier to identify publicly posted content on social media that involves the user driving. Furthermore, a framework proposed to understand the factors behind voluntary risk-taking activity observes that younger individuals are more willing to perform such activities, and that men (as opposed to women) are more inclined to take risks. Grounding our observations in this framework, we test these hypotheses on 173 cities across the world. We conduct spatial and temporal analyses at the city level to understand how distracted driving posting behavior varies with demographics. We discover that the factors put forth by the framework are significant in estimating the extent of such behavior.
    Comment: Accepted at the International Conference on Web and Social Media (ICWSM) 2020; 12 pages.
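
    The abstract does not specify the classifier's architecture, so the following is only a generic sketch of a deep learning text classifier for flagging driving-while-posting content: mean-pooled word embeddings followed by a linear output layer. The vocabulary size, dimensions, and token IDs are toy values, not the paper's.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 5_000, 64, 2  # toy sizes

class DrivingPostClassifier(nn.Module):
    """Generic text classifier: mean-pooled word embeddings + linear layer."""
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB_SIZE, EMBED_DIM)  # averages token vectors
        self.out = nn.Linear(EMBED_DIM, NUM_CLASSES)         # driving-related vs. not

    def forward(self, token_ids, offsets):
        return self.out(self.embed(token_ids, offsets))

model = DrivingPostClassifier()
tokens = torch.tensor([11, 42, 7, 1003, 5])  # two toy posts, concatenated
offsets = torch.tensor([0, 3])               # post boundaries within `tokens`
logits = model(tokens, offsets)              # shape: (2 posts, 2 classes)
```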

    Explainable machine learning for public policy: Use cases, gaps, and research directions

    Explainability is highly desired in machine learning (ML) systems supporting high-stakes policy decisions in areas such as health, criminal justice, education, and employment. While the field of explainable ML has expanded in recent years, much of this work has not taken real-world needs into account. A majority of proposed methods are designed with generic explainability goals, without well-defined use cases or intended end users, and are evaluated on simplified tasks, benchmark problems/datasets, or with proxy users (e.g., Amazon Mechanical Turk). We argue that these simplified evaluation settings do not capture the nuances and complexities of real-world applications. As a result, the applicability and effectiveness of this large body of theoretical and methodological work in real-world applications are unclear. In this work, we take steps toward addressing this gap for the domain of public policy. First, we identify the primary use cases of explainable ML within public policy problems. For each use case, we define the end users of explanations and the specific goals the explanations have to fulfill. Finally, we map existing work in explainable ML to these use cases, identify gaps in established capabilities, and propose research directions to fill those gaps in order to have a practical societal impact through ML. The contributions are (a) a methodology for explainable ML researchers to identify use cases and develop methods targeted at them, and (b) an application of that methodology to the domain of public policy, giving researchers an example of developing explainable ML methods that result in real-world impact.

    Modeling User Behavior on Socio-Technical Systems: Patterns and Anomalies

    How can we model user behavior on social media platforms and social networking websites? How can we use such models to characterize behavior on social media and infer human behavior and preferences at scale? Specifically, how can we describe users who post about risk-taking behavior on social media, or who mobilize against a particular entity in a firestorm event on Twitter? Online social network platforms (e.g., Facebook, Twitter, Snapchat, Yelp) provide means for users to express themselves by posting content in the form of images and videos. These platforms allow users to interact not only with content (liking, commenting) but also with other users (social connections, chatting) and items (through ratings and reviews), thus providing rich data with huge potential for mining unexplored and useful patterns. The availability of such data opens up unique opportunities to understand and model the nuances of how users interact with such socio-technical systems, while also contributing novel algorithms that can predict genuine user behavior and detect malicious entities at such a large scale.

    In this dissertation, we focus on two broad topics: (a) understanding user behavior on social media platforms and (b) detecting fraudulent activities on these platforms. For the first topic, we study user behavior in two settings. (i) Individual user behavior, where we analyze actions taken at the individual scale; for example, how does an individual's expertise in e-commerce systems (such as wine rating or movie rating) evolve over time, and how can that be used to recommend the next product? (ii) User-based phenomena, where multiple users are analyzed collectively to discover an interesting phenomenon; for example, what are the characteristics of the communication patterns between users participating in a firestorm event? For the second topic, we tackle the problem of detecting fraudulent activities on social media platforms in two related sub-themes: in the first, we characterize various fraudulent activities on social media platforms and propose anomaly detection models to identify fraudulent users and activities; in the second, we propose models that are not confined to social media platforms but can also be extended to general settings. Overall, this thesis looks at two closely related problems: modeling user behavior on social media platforms, and then using similarly generated models to detect abnormal and potentially fraudulent behavior.

    Experience-Aware Item Recommendation in Evolving Review Communities

    Current recommender systems exploit user and item similarities by collaborative filtering. Some advanced methods also consider the temporal evolution of item ratings as a global background process. However, all prior methods disregard the individual evolution of a user's experience level and how this is expressed in the user's writing in a review community. In this paper, we model the joint evolution of user experience, interest in specific item facets, writing style, and rating behavior. This way we can generate individual recommendations that take into account the user's maturity level (e.g., recommending art movies rather than blockbusters for a cinematography expert). As only item ratings and review texts are observables, we capture the user's experience and interests in a latent model learned from her reviews, vocabulary, and writing style. We develop a generative HMM-LDA model to trace user evolution, where the Hidden Markov Model (HMM) traces her latent experience progressing over time, with only user reviews and ratings as observables. The facets of a user's interest are drawn from a Latent Dirichlet Allocation (LDA) model derived from her reviews, as a function of her (again latent) experience level. In experiments with four real-world datasets, we show that our model improves rating prediction over state-of-the-art baselines by a substantial margin. In addition, our model can also provide interpretations of a user's experience level.

    TellTail: Fast Scoring and Detection of Dense Subgraphs

    Suppose you visit an e-commerce site and see that 50 users each reviewed almost all of the same 500 products, several times each: would you get suspicious? Similarly, given a Twitter follow graph, how can we design principled measures for identifying surprisingly dense subgraphs? Dense subgraphs often indicate interesting structure, such as network attacks in network traffic graphs. However, most existing dense subgraph measures either do not model normal variation, or model it using an Erdős–Rényi assumption, an assumption that was discredited decades ago. What is the right assumption, then? We propose a novel application of extreme value theory to the dense subgraph problem, which allows us to propose measures and algorithms that evaluate the surprisingness of a subgraph probabilistically, without requiring restrictive assumptions (e.g., Erdős–Rényi). We then improve the practicality of our approach by incorporating empirical observations about dense subgraph patterns in real graphs, and by proposing a fast pruning-based search algorithm. Our approach (a) provides theoretical guarantees of consistency, (b) scales quasi-linearly, and (c) outperforms baselines in synthetic and ground-truth settings.
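
    To make the extreme-value idea concrete, the sketch below scores a subgraph's edge mass against a null distribution built from random same-size node sets, using a peaks-over-threshold fit of a generalized Pareto distribution (a standard extreme-value recipe). It illustrates the general approach of judging surprisingness via tail probabilities; it is not TellTail's actual estimator or its pruning-based search.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

def subgraph_mass(adj, nodes):
    """Number of edges inside a node set (undirected adjacency, no self-loops)."""
    return adj[np.ix_(nodes, nodes)].sum() / 2

# Toy graph: sparse background plus one planted dense block on nodes 0..19
n, k = 300, 20
adj = (rng.random((n, n)) < 0.02).astype(int)
adj[:k, :k] = (rng.random((k, k)) < 0.6).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T          # symmetrize

# Null distribution: masses of many random node sets of the same size
null_masses = np.array([
    subgraph_mass(adj, rng.choice(n, k, replace=False)) for _ in range(2000)
])

# Peaks-over-threshold: fit a generalized Pareto to excesses above a high
# threshold, then score the suspect subgraph by its tail probability.
u = np.quantile(null_masses, 0.9)
excesses = null_masses[null_masses > u] - u
shape, _, scale = genpareto.fit(excesses, floc=0)

observed = subgraph_mass(adj, np.arange(k))                 # the planted block
tail_p = genpareto.sf(observed - u, shape, loc=0, scale=scale)
print(f"observed mass {observed:.0f}, tail probability {tail_p:.3g}")
```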